Stokes' Theorem
Stokes' theorem, also known as the Kelvin–Stokes theorem after Lord Kelvin and George Stokes, the fundamental theorem for curls, the curl theorem, or the rotor theorem, is a theorem in vector calculus on three-dimensional Euclidean space \R^3, the set of ordered triples of real numbers regarded as column vectors, equipped with the usual vector-space operations. Given a vector field, the theorem relates the surface integral of the curl of the vector field over some surface to the line integral of the vector field around the boundary of that surface. The classical theorem of Stokes can be stated in one sentence:
The line integral of a vector field over a loop is equal to the surface integral of its curl over the enclosed surface.

Stokes' theorem is a special case of the generalized Stokes theorem.

In particular, a vector field on \R^3 can be considered as a 1-form in which case its curl is its exterior derivative, a 2-form.


Theorem
Let \Sigma be a smooth oriented surface in \R^3, parametrized by \mathbf\Sigma(u,v), with boundary \partial \Sigma \equiv \Gamma , parametrized by \mathbf\Gamma(t). If a vector field
\mathbf{F}(x,y,z) = (F_x(x, y, z), F_y(x, y, z), F_z(x, y, z))
has continuous first-order partial derivatives in a region containing \Sigma, then \iint_\Sigma (\nabla \times \mathbf{F}) \cdot d\mathbf{\Sigma} = \oint_{\partial\Sigma} \mathbf{F} \cdot d\mathbf{\Gamma} with the shorthands for the line element d\mathbf\Gamma = \frac{d\mathbf\Gamma}{dt}dt and the surface element d\mathbf{\Sigma} = \mathbf{n}\, d\Sigma = \left( \frac{\partial \mathbf\Sigma}{\partial u} \times \frac{\partial \mathbf\Sigma}{\partial v} \right) du\, dv, where \mathbf{n}(u,v) is a vector normal to the surface at the point \mathbf\Sigma(u,v).
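The equality can be checked numerically for a concrete choice of field and surface. The field F = (-y, x, 0) and the flat unit disk below are illustrative assumptions, not part of the theorem; both integrals are approximated by midpoint Riemann sums.

```python
import math

# Numerical sanity check of Stokes' theorem for an illustrative choice:
# F(x, y, z) = (-y, x, 0), whose curl is (0, 0, 2), over the unit disk
# in the plane z = 0, whose boundary is the unit circle.

def F(x, y, z):
    return (-y, x, 0.0)

def curl_F(x, y, z):
    return (0.0, 0.0, 2.0)   # computed by hand for this particular F

# Surface integral: parametrize the disk by psi(r, t) = (r cos t, r sin t, 0),
# so that dSigma = (psi_r x psi_t) dr dt = (0, 0, r) dr dt.
N = 400
dr, dt = 1.0 / N, 2 * math.pi / N
surf = 0.0
for i in range(N):
    r = (i + 0.5) * dr
    for j in range(N):
        t = (j + 0.5) * dt
        cz = curl_F(r * math.cos(t), r * math.sin(t), 0.0)[2]
        surf += cz * r * dr * dt

# Line integral around the boundary Gamma(t) = (cos t, sin t, 0),
# with Gamma'(t) = (-sin t, cos t, 0).
M = 100000
ds = 2 * math.pi / M
line = 0.0
for j in range(M):
    t = (j + 0.5) * ds
    fx, fy, _ = F(math.cos(t), math.sin(t), 0.0)
    line += (fx * -math.sin(t) + fy * math.cos(t)) * ds

print(surf, line)   # both approximate 2*pi
```

For this field the curl is constant, so the surface integral is just twice the disk's area, which matches the circulation around the boundary.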

More explicitly, with \wedge being the wedge (exterior) product, the equality says that \begin{align} &\iint_\Sigma \left(\left(\frac{\partial F_z}{\partial y}-\frac{\partial F_y}{\partial z} \right)\,\mathrm{d}y\wedge \mathrm{d}z +\left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)\, \mathrm{d}z\wedge \mathrm{d}x +\left (\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)\, \mathrm{d}x\wedge \mathrm{d}y\right) \\ & = \oint_{\partial\Sigma} \Bigl(F_x\, \mathrm{d}x+F_y\, \mathrm{d}y+F_z\, \mathrm{d}z\Bigr). \end{align}

The main challenge in a precise statement of Stokes' theorem is in defining the notion of a boundary. Surfaces such as the Koch snowflake, for example, are well known not to exhibit a Riemann-integrable boundary, and the notion of surface measure in Lebesgue theory cannot be defined for a non-Lipschitz surface. One (advanced) technique is to pass to a homological formulation and then apply the machinery of geometric measure theory; for that approach see the coarea formula. In this article, we instead use a more elementary definition, based on the fact that a boundary can be discerned for full-dimensional subsets of \R^2.

A more detailed statement will be given for subsequent discussions. Let \gamma:[a,b]\to\R^2 be a smooth Jordan plane curve, that is, a simple closed curve in the plane. The Jordan curve theorem implies that \gamma divides \R^2 into two components, a compact one and another that is non-compact. Let D denote the compact part; then D is bounded by \gamma. It now suffices to transfer this notion of boundary along a continuous map to our surface in \R^3. But we already have such a map: the parametrization of \Sigma.

Suppose \psi:D\to\R^3 is smooth in a neighborhood of D, that is, on an open set in \R^2 that contains D (by construction, D is a bounded closed set in \R^2), with \Sigma=\psi(D), the image of D under \psi. If \Gamma is the space curve defined by \Gamma(t)=\psi(\gamma(t)), then we call \Gamma the boundary of \Sigma, written \partial\Sigma. Note that \Gamma may fail to be a Jordan curve if the loop \gamma interacts poorly with \psi; nonetheless, \Gamma is always a loop, and topologically a connected sum of Jordan curves, so the integrals are well-defined.

This notion of boundary can differ from the topological one. For example, even in two-dimensional polar coordinates, if we let \partial\Sigma = \Gamma, then \partial\Sigma can clearly contain a topological interior point of \Sigma. In combinatorial manifolds this is not such a serious problem, since the line integrals over the interior points cancel out: in an orientable manifold, the line integral over \partial\Sigma coincides with the line integral over the true boundary. An even more important example is that the theorem also applies to manifolds without a topological boundary, such as the sphere or the torus. In such cases, if \partial\Sigma = \Gamma, then \partial\Sigma is completely contained within \Sigma, and the resulting line integrals over \Gamma completely cancel each other out. This allows us to assert that the surface integral of the curl of a vector field on a manifold without boundary is zero.

With the above notation, if \mathbf{F} is any smooth vector field on \R^3, then

\oint_{\partial\Sigma} \mathbf{F}\, \cdot\, d{\mathbf{\Gamma}} = \iint_{\Sigma} \nabla\times\mathbf{F}\, \cdot\, d\mathbf{\Sigma}.
(Robert Scheichl, lecture notes for a University of Bath mathematics course.)

Here, the "\cdot" represents the dot product in \R^3.


Special case of a more general theorem
Stokes' theorem can be viewed as a special case of the following identity: \oint_{\partial\Sigma} (\mathbf{F}\, \cdot\, d{\mathbf{\Gamma}})\,\mathbf{g} = \iint_{\Sigma}\left[d\mathbf{\Sigma}\cdot\left(\nabla\times\mathbf{F} - \mathbf{F}\times\nabla\right)\right]\mathbf{g}, where \mathbf{g} is any smooth vector or scalar field in \mathbb{R}^3. When \mathbf{g} is a uniform scalar field, the standard Stokes' theorem is recovered.


Proof
The proof of the theorem consists of 4 steps. We assume Green's theorem, so the task is to reduce the complicated three-dimensional problem (Stokes' theorem) to a more elementary two-dimensional one (Green's theorem).
When proving this theorem, mathematicians normally deduce it as a special case of a more general result, which is stated in terms of differential forms, and proved using more sophisticated machinery. While powerful, these techniques require substantial background, so the proof below avoids them, and does not presuppose any knowledge beyond a familiarity with basic vector calculus and linear algebra. At the end of this section, a short alternative proof of Stokes' theorem is given, as a corollary of the generalized Stokes' theorem.


Elementary proof

First step of the elementary proof (parametrization of integral)
As in the Theorem section above, we reduce the dimension by using the natural parametrization of the surface. Let \boldsymbol\psi and \gamma be as in that section, and note that by change of variables \oint_{\partial\Sigma}{\mathbf{F}(\mathbf{x})\cdot\,\mathrm{d}\mathbf{\Gamma}} = \oint_{\gamma}{\mathbf{F}(\boldsymbol{\psi}(\mathbf{\gamma}))\cdot\,\mathrm{d}\boldsymbol{\psi}(\mathbf{\gamma})} = \oint_{\gamma}{\mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))\cdot J_{\mathbf{y}}(\boldsymbol{\psi})\,\mathrm{d}\mathbf{y}} where J_{\mathbf{y}}(\boldsymbol{\psi}) stands for the Jacobian matrix of \boldsymbol\psi at \mathbf{y}=\gamma(t).

Now let \{\mathbf{e}_u, \mathbf{e}_v\} be an orthonormal basis in the coordinate directions of \R^2. In this article, \mathbf{e}_u= \begin{bmatrix}1 \\ 0 \end{bmatrix} , \mathbf{e}_v = \begin{bmatrix}0 \\ 1 \end{bmatrix}. Note that some textbooks on vector analysis use these symbols instead for the normalized tangent vectors \mathbf{t}_{u} =\frac {1}{h_u}\frac {\partial \boldsymbol\psi}{\partial u} \, , \mathbf{t}_{v} =\frac {1}{h_v}\frac {\partial \boldsymbol\psi}{\partial v}, where h_u = \left\|\frac {\partial \boldsymbol\psi}{\partial u}\right\| , h_v = \left\|\frac {\partial \boldsymbol\psi}{\partial v} \right\| and "\| \cdot \|" is the Euclidean norm; in this article, however, \mathbf{e}_u and \mathbf{e}_v denote the constant coordinate basis vectors, which are completely different things.

Recognizing that the columns of J_{\mathbf{y}}(\boldsymbol\psi) are precisely the partial derivatives of \boldsymbol\psi at \mathbf{y}, we can expand the previous equation in coordinates as \begin{align} \oint_{\partial\Sigma}{\mathbf{F}(\mathbf{x})\cdot\,\mathrm{d}\mathbf{\Gamma}} &= \oint_{\gamma}{\mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))\cdot J_{\mathbf{y}}(\boldsymbol{\psi})\mathbf{e}_u(\mathbf{e}_u\cdot\,\mathrm{d}\mathbf{y}) + \mathbf{F}(\boldsymbol{\psi}(\mathbf{y}))\cdot J_{\mathbf{y}}(\boldsymbol{\psi})\mathbf{e}_v(\mathbf{e}_v\cdot\,\mathrm{d}\mathbf{y})} \\ \end{align}


Second step in the elementary proof (defining the pullback)
The previous step suggests we define the function \mathbf{P}(u,v) = \left(\mathbf{F}(\boldsymbol{\psi}(u,v))\cdot\frac{\partial\boldsymbol{\psi}}{\partial u}(u,v)\right)\mathbf{e}_u + \left(\mathbf{F}(\boldsymbol{\psi}(u,v))\cdot\frac{\partial\boldsymbol{\psi}}{\partial v}(u,v) \right)\mathbf{e}_v

Now, if the scalar value functions P_u and P_v are defined as follows, {P_u}(u,v) = \left(\mathbf{F}(\boldsymbol{\psi}(u,v))\cdot\frac{\partial\boldsymbol{\psi}}{\partial u}(u,v)\right) {P_v}(u,v) =\left(\mathbf{F}(\boldsymbol{\psi}(u,v))\cdot\frac{\partial\boldsymbol{\psi}}{\partial v}(u,v) \right) then, \mathbf{P}(u,v) = {P_u}(u,v) \mathbf{e}_u + {P_v}(u,v) \mathbf{e}_v .

This is the pullback of \mathbf{F} along \boldsymbol\psi, and, by the above, it satisfies \oint_{\partial\Sigma}{\mathbf{F}(\mathbf{x})\cdot\,\mathrm{d}\mathbf{\Gamma}} = \oint_{\gamma}{\mathbf{P}(\mathbf{y})\cdot\,\mathrm{d}\mathbf{y}} = \oint_{\gamma}{\left( {P_u}\,\mathrm{d}u + {P_v}\,\mathrm{d}v \right)}.

We have successfully reduced one side of Stokes' theorem to a 2-dimensional formula; we now turn to the other side.
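The reduction in steps 1 and 2 can be sketched numerically. The field F = (z, x, y), the paraboloid patch psi(u, v) = (u, v, 1 - u^2 - v^2), and the loop gamma(t) = (cos t, sin t) below are illustrative assumptions, not from the text; integrating the pullback P around gamma reproduces the line integral of F around Gamma = psi(gamma).

```python
import math

# Check that integrating the pullback P = (F.psi_u, F.psi_v) around the
# plane loop gamma equals the line integral of F around Gamma = psi(gamma).
# F, psi, and gamma are arbitrary illustrative choices.

def F(x, y, z):
    return (z, x, y)

def psi(u, v):
    return (u, v, 1.0 - u * u - v * v)

def psi_u(u, v):                 # partial derivatives, computed by hand
    return (1.0, 0.0, -2.0 * u)

def psi_v(u, v):
    return (0.0, 1.0, -2.0 * v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def P(u, v):                     # the pullback (P_u, P_v)
    f = F(*psi(u, v))
    return (dot(f, psi_u(u, v)), dot(f, psi_v(u, v)))

N = 20000
dt = 2 * math.pi / N
lhs = 0.0    # integral of P_u du + P_v dv around gamma
rhs = 0.0    # integral of F . dGamma around Gamma = psi(gamma)
for j in range(N):
    t = (j + 0.5) * dt
    u, v = math.cos(t), math.sin(t)
    du, dv = -math.sin(t), math.cos(t)        # gamma'(t)
    pu, pv = P(u, v)
    lhs += (pu * du + pv * dv) * dt
    # Gamma'(t) by the chain rule: psi_u * u'(t) + psi_v * v'(t)
    gp = tuple(a * du + b * dv for a, b in zip(psi_u(u, v), psi_v(u, v)))
    rhs += dot(F(*psi(u, v)), gp) * dt

print(lhs, rhs)   # equal; both approximate pi for this choice
```

By the chain rule the two integrands are identical, so the agreement here is exact up to floating-point rounding.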


Third step of the elementary proof (second equation)
First, calculate the partial derivatives appearing in Green's theorem, via the product rule: \begin{align} \frac{\partial P_u}{\partial v} &= \frac{\partial (\mathbf{F}\circ \boldsymbol{\psi})}{\partial v}\cdot\frac{\partial \boldsymbol\psi}{\partial u} + (\mathbf{F}\circ \boldsymbol\psi) \cdot\frac{\partial^2 \boldsymbol\psi}{\partial v \, \partial u} \\[5pt] \frac{\partial P_v}{\partial u} &= \frac{\partial (\mathbf{F}\circ \boldsymbol{\psi})}{\partial u}\cdot\frac{\partial \boldsymbol\psi}{\partial v} + (\mathbf{F}\circ \boldsymbol\psi) \cdot\frac{\partial^2 \boldsymbol\psi}{\partial u \, \partial v} \end{align}

Conveniently, the second term vanishes in the difference, by equality of mixed partials. So (using that \textbf{a}\cdot A \textbf{b} = \textbf{a}^\mathsf{T}A \textbf{b}, and therefore \textbf{a}\cdot A \textbf{b} = \textbf{b} \cdot A^\mathsf{T} \textbf{a}, for all \textbf{a}, \textbf{b} \in \mathbb{R}^{n} and all n\times n matrices A), \begin{align} \frac{\partial P_v}{\partial u} - \frac{\partial P_u}{\partial v} &= \frac{\partial (\mathbf{F}\circ \boldsymbol\psi)}{\partial u}\cdot\frac{\partial \boldsymbol\psi}{\partial v} - \frac{\partial (\mathbf{F}\circ \boldsymbol\psi)}{\partial v}\cdot\frac{\partial \boldsymbol\psi}{\partial u} \\[5pt] &= \frac{\partial \boldsymbol\psi}{\partial v}\cdot(J_{\boldsymbol\psi(u,v)}\mathbf{F})\frac{\partial \boldsymbol\psi}{\partial u} - \frac{\partial \boldsymbol\psi}{\partial u}\cdot(J_{\boldsymbol\psi(u,v)}\mathbf{F})\frac{\partial \boldsymbol\psi}{\partial v} && \text{(chain rule)}\\[5pt] &= \frac{\partial \boldsymbol\psi}{\partial v}\cdot\left(J_{\boldsymbol\psi(u,v)}\mathbf{F}-{(J_{\boldsymbol\psi(u,v)}\mathbf{F})}^{\mathsf{T}}\right)\frac{\partial \boldsymbol\psi}{\partial u} \end{align}

But now consider the matrix in that quadratic form, that is, J_{\boldsymbol\psi(u,v)}\mathbf{F}-(J_{\boldsymbol\psi(u,v)}\mathbf{F})^{\mathsf{T}}. We claim this matrix in fact describes a cross product. Here the superscript " {}^{\mathsf{T}} " represents the transpose of a matrix.

To be precise, let A=(A_{ij})_{ij} be an arbitrary 3\times 3 matrix and let \mathbf{a}= \begin{bmatrix}a_1 \\ a_2 \\ a_3\end{bmatrix} = \begin{bmatrix}A_{32}-A_{23} \\ A_{13}-A_{31} \\ A_{21}-A_{12}\end{bmatrix}

Note that the map \mathbf{x}\mapsto\left(A-A^{\mathsf{T}}\right)\mathbf{x} is linear, so it is determined by its action on basis elements. But by direct calculation \begin{align} \left(A-A^{\mathsf{T}}\right)\mathbf{e}_1 &= \begin{bmatrix} 0 \\ a_3 \\ -a_2 \end{bmatrix} = \mathbf{a}\times\mathbf{e}_1\\ \left(A-A^{\mathsf{T}}\right)\mathbf{e}_2 &= \begin{bmatrix} -a_3 \\ 0 \\ a_1 \end{bmatrix} = \mathbf{a}\times\mathbf{e}_2\\ \left(A-A^{\mathsf{T}}\right)\mathbf{e}_3 &= \begin{bmatrix} a_2 \\ -a_1 \\ 0 \end{bmatrix} = \mathbf{a}\times\mathbf{e}_3 \end{align} Here, \{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\} represents the standard orthonormal basis in the coordinate directions of \R^3: \mathbf{e}_1= \begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix} , \mathbf{e}_2 = \begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix} , \mathbf{e}_3 = \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}. (Note that some textbooks on vector analysis assign these symbols to different things.)

Thus \left(A-A^{\mathsf{T}}\right)\mathbf{x} = \mathbf{a}\times\mathbf{x} for any \mathbf{x}\in\R^3.

Substituting {(J_{\boldsymbol\psi(u,v)}\mathbf{F})} for A, we obtain \left({(J_{\boldsymbol\psi(u,v)}\mathbf{F})} - {(J_{\boldsymbol\psi(u,v)}\mathbf{F})}^{\mathsf{T}} \right) \mathbf{x} =(\nabla\times\mathbf{F})\times \mathbf{x}, \quad \text{for all}\, \mathbf{x}\in\R^{3}
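The matrix-to-cross-product correspondence is easy to spot-check numerically; the matrix A and the vector x below are arbitrary illustrative values.

```python
# Spot-check: for a 3x3 matrix A and a = (A32 - A23, A13 - A31, A21 - A12),
# the matrix A - A^T acts on vectors as cross product with a.
# The entries of A and x are arbitrary.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 10.0]]

a = (A[2][1] - A[1][2],      # A32 - A23
     A[0][2] - A[2][0],      # A13 - A31
     A[1][0] - A[0][1])      # A21 - A12

x = (0.3, -1.2, 2.5)

skew_x = tuple(sum((A[i][j] - A[j][i]) * x[j] for j in range(3))
               for i in range(3))          # (A - A^T) x
ax = cross(a, x)
print(skew_x, ax)    # the two vectors agree
```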

We can now recognize the difference of partials as a (scalar) triple product: \begin{align} \frac{\partial P_v}{\partial u} - \frac{\partial P_u}{\partial v} &= \frac{\partial \boldsymbol\psi}{\partial v}\cdot\left((\nabla\times\mathbf{F}) \times \frac{\partial \boldsymbol\psi}{\partial u}\right) = (\nabla\times\mathbf{F})\cdot \left(\frac{\partial \boldsymbol\psi}{\partial u} \times \frac{\partial \boldsymbol\psi}{\partial v}\right) \end{align}

On the other hand, the definition of a surface integral also involves a triple product, the very same one! \begin{align} \iint_\Sigma (\nabla\times\mathbf{F})\cdot \, d\mathbf{\Sigma} &=\iint_D {(\nabla\times\mathbf{F})(\boldsymbol\psi(u,v))\cdot\frac{\partial \boldsymbol\psi}{\partial u}(u,v)\times \frac{\partial \boldsymbol\psi}{\partial v}(u,v)\,\mathrm{d}u\,\mathrm{d}v} \end{align}

So, we obtain \iint_\Sigma (\nabla\times\mathbf{F})\cdot \,\mathrm{d}\mathbf{\Sigma } = \iint_D \left( \frac{\partial P_v}{\partial u} - \frac{\partial P_u}{\partial v} \right) \,\mathrm{d}u\,\mathrm{d}v
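A minimal numerical sketch of the step-3 identity, assuming an illustrative field F = (yz, xz, xyz) and a paraboloid patch (both my choices, not from the text); the partials of P_u and P_v are taken by central finite differences at one sample point.

```python
# Finite-difference check, at a sample point, of the step-3 identity
#     dP_v/du - dP_u/dv = (curl F) . (psi_u x psi_v).
# The field F and the patch psi are illustrative choices only.

def F(x, y, z):
    return (y * z, x * z, x * y * z)

def curl_F(x, y, z):             # computed by hand for this F
    return (x * z - x, y - y * z, 0.0)

def psi(u, v):
    return (u, v, 1.0 - u * u - v * v)

def psi_u(u, v):
    return (1.0, 0.0, -2.0 * u)

def psi_v(u, v):
    return (0.0, 1.0, -2.0 * v)

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def P_u(u, v):
    return dot(F(*psi(u, v)), psi_u(u, v))

def P_v(u, v):
    return dot(F(*psi(u, v)), psi_v(u, v))

u0, v0, h = 0.3, 0.2, 1e-5
lhs = ((P_v(u0 + h, v0) - P_v(u0 - h, v0))
       - (P_u(u0, v0 + h) - P_u(u0, v0 - h))) / (2 * h)
rhs = dot(curl_F(*psi(u0, v0)), cross(psi_u(u0, v0), psi_v(u0, v0)))
print(lhs, rhs)   # agree to finite-difference accuracy
```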


Fourth step of the elementary proof (reduction to Green's theorem)
Combining the second and third steps and then applying Green's theorem completes the proof. Green's theorem asserts the following: for any region D bounded by a Jordan closed curve \gamma and any two scalar-valued smooth functions P_u(u,v), P_v(u,v) defined on D,

\oint_{\gamma}{\left( {P_u}\,\mathrm{d}u + {P_v}\,\mathrm{d}v \right)} = \iint_D \left( \frac{\partial P_v}{\partial u} - \frac{\partial P_u}{\partial v} \right) \,\mathrm{d}u\,\mathrm{d}v

We can substitute the conclusion of step 2 into the left-hand side of Green's theorem above, and substitute the conclusion of step 3 into the right-hand side. Q.E.D.
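Green's theorem itself can be illustrated numerically; the choice P_u = -v, P_v = u on the unit disk is an arbitrary example for which both sides equal 2*pi.

```python
import math

# Illustration of Green's theorem on the unit disk with the arbitrary
# choice P_u = -v, P_v = u, so dP_v/du - dP_u/dv = 2 everywhere and
# both sides equal twice the disk's area, 2*pi.

N = 20000
dt = 2 * math.pi / N
line = 0.0
for j in range(N):
    t = (j + 0.5) * dt
    u, v = math.cos(t), math.sin(t)           # gamma(t), the unit circle
    du, dv = -math.sin(t), math.cos(t)        # gamma'(t)
    line += (-v * du + u * dv) * dt           # P_u du + P_v dv

M = 2000
area_term = 0.0
for i in range(M):
    r = (i + 0.5) / M
    area_term += 2.0 * r * (1.0 / M) * (2 * math.pi)   # integrand 2, polar coords
print(line, area_term)   # both approximate 2*pi
```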


Proof via differential forms
The functions \R^3\to\R^3 can be identified with the differential 1-forms on \R^3 via the map F_x\mathbf{e}_1+F_y\mathbf{e}_2+F_z\mathbf{e}_3 \mapsto F_x\,\mathrm{d}x + F_y\,\mathrm{d}y + F_z\,\mathrm{d}z .

Write the differential 1-form associated to a function \mathbf{F} as \omega_{\mathbf{F}}. Then one can calculate that \star\omega_{\nabla\times\mathbf{F}}=\mathrm{d}\omega_{\mathbf{F}}, where \star is the Hodge star and \mathrm{d} is the exterior derivative. Thus, by the generalized Stokes' theorem, \oint_{\partial\Sigma} \mathbf{F}\cdot\,\mathrm{d}\mathbf{\Gamma} = \oint_{\partial\Sigma} \omega_{\mathbf{F}} = \iint_{\Sigma} \mathrm{d}\omega_{\mathbf{F}} = \iint_{\Sigma} \star\omega_{\nabla\times\mathbf{F}} = \iint_{\Sigma} (\nabla\times\mathbf{F})\cdot\,\mathrm{d}\mathbf{\Sigma}
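For concreteness, carrying out the exterior derivative componentwise recovers exactly the curl coefficients that appear in the wedge-product form of the theorem stated earlier:

```latex
\begin{align*}
\mathrm{d}\omega_{\mathbf{F}}
  &= \mathrm{d}F_x\wedge\mathrm{d}x + \mathrm{d}F_y\wedge\mathrm{d}y + \mathrm{d}F_z\wedge\mathrm{d}z \\
  &= \left(\frac{\partial F_z}{\partial y}-\frac{\partial F_y}{\partial z}\right)\mathrm{d}y\wedge\mathrm{d}z
   + \left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)\mathrm{d}z\wedge\mathrm{d}x
   + \left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)\mathrm{d}x\wedge\mathrm{d}y \\
  &= \star\,\omega_{\nabla\times\mathbf{F}}
\end{align*}
```

using \mathrm{d}x\wedge\mathrm{d}x = 0 and \mathrm{d}x\wedge\mathrm{d}y = -\mathrm{d}y\wedge\mathrm{d}x, together with \star\mathrm{d}x = \mathrm{d}y\wedge\mathrm{d}z and its cyclic permutations.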



Applications

Irrotational fields
In this section, we discuss irrotational fields (lamellar vector fields) in light of Stokes' theorem.

Definition 2-1 (irrotational field). A smooth vector field \mathbf{F} on an open set U\subseteq\R^3 is irrotational (a lamellar vector field) if \nabla\times\mathbf{F} = \mathbf{0}.

This concept is very fundamental in mechanics; as we'll prove later, if \mathbf{F} is irrotational and the domain of \mathbf{F} is simply connected, then \mathbf{F} is a conservative vector field.
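In the converse direction, every gradient field is irrotational (curl grad = 0), which is easy to illustrate numerically. The potential phi(x, y, z) = x*y*z below is an arbitrary choice, and both the gradient and the curl are approximated by central differences.

```python
# Spot-check that a gradient field is irrotational: F = grad(phi) with
# phi(x, y, z) = x*y*z (an arbitrary smooth choice), curl taken by
# nested central finite differences at a sample point.

def phi(x, y, z):
    return x * y * z

h = 1e-5

def grad_phi(x, y, z):
    return ((phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h),
            (phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h),
            (phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h))

def curl(Fn, x, y, z):
    Fxp, Fxm = Fn(x + h, y, z), Fn(x - h, y, z)
    Fyp, Fym = Fn(x, y + h, z), Fn(x, y - h, z)
    Fzp, Fzm = Fn(x, y, z + h), Fn(x, y, z - h)
    return ((Fyp[2] - Fym[2]) / (2 * h) - (Fzp[1] - Fzm[1]) / (2 * h),
            (Fzp[0] - Fzm[0]) / (2 * h) - (Fxp[2] - Fxm[2]) / (2 * h),
            (Fxp[1] - Fxm[1]) / (2 * h) - (Fyp[0] - Fym[0]) / (2 * h))

c = curl(grad_phi, 0.7, -0.4, 1.3)
print(c)   # all components vanish to finite-difference accuracy
```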


Helmholtz's theorem
In this section, we will introduce a theorem that is derived from Stokes' theorem and characterizes vortex-free vector fields. In classical mechanics and fluid dynamics it is called Helmholtz's theorem.

Theorem 2-1 (Helmholtz's theorem in fluid dynamics). Let U\subseteq\R^3 be an open set with a lamellar vector field \mathbf{F}, and let c_0, c_1: [0, 1] \to U be piecewise smooth loops. If there is a function H: [0, 1] \times [0, 1] \to U such that

  • TLH0 H is piecewise smooth,
  • TLH1 H(t, 0) = c_0(t) for all t \in [0, 1],
  • TLH2 H(t, 1) = c_1(t) for all t \in [0, 1],
  • TLH3 H(0, s) = H(1, s) for all s \in [0, 1].
Then, \int_{c_0} \mathbf{F} \, \mathrm{d}c_0=\int_{c_1} \mathbf{F} \, \mathrm{d}c_1
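Theorem 2-1 can be illustrated numerically with the standard vortex field on U equal to R^3 minus the z-axis (an illustrative choice, not from the text): it is lamellar on U, and two tubular-homotopic loops around the axis yield the same line integral, here 2*pi each, nonzero because U is not simply connected.

```python
import math

# Illustration of theorem 2-1: on U = R^3 minus the z-axis, the field
# F = (-y/(x^2+y^2), x/(x^2+y^2), 0) is lamellar (curl F = 0 on U), and
# two loops around the axis, tubular-homotopic in U, give equal integrals.

def F(x, y, z):
    r2 = x * x + y * y
    return (-y / r2, x / r2, 0.0)

def loop_integral(c, cdot, N=20000):
    dt = 2 * math.pi / N
    total = 0.0
    for j in range(N):
        t = (j + 0.5) * dt
        fx, fy, fz = F(*c(t))
        dx, dy, dz = cdot(t)
        total += (fx * dx + fy * dy + fz * dz) * dt
    return total

# c0: unit circle at height z = 0; c1: radius-2 circle at height z = 1.
# They are tubular-homotopic in U (slide up and stretch around the axis).
I0 = loop_integral(lambda t: (math.cos(t), math.sin(t), 0.0),
                   lambda t: (-math.sin(t), math.cos(t), 0.0))
I1 = loop_integral(lambda t: (2 * math.cos(t), 2 * math.sin(t), 1.0),
                   lambda t: (-2 * math.sin(t), 2 * math.cos(t), 0.0))
print(I0, I1)   # both approximate 2*pi
```

That the common value is nonzero also shows why Lemma 2-2 below needs the loop to be contractible to a point: this U is not simply connected.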

Some textbooks, such as Lawrence, call the relationship between c_0 and c_1 stated in theorem 2-1 "homotopic" and the function H a "homotopy between c_0 and c_1". However, "homotopic" and "homotopy" in the above sense are different from (stronger than) the usual topological notions, which omit condition TLH3. So from now on we refer to homotopy (homotope) in the sense of theorem 2-1 as tubular homotopy (resp. tubular-homotopic).


Proof of Helmholtz's theorem
In what follows, we abuse notation and use "\oplus" for concatenation of paths in the fundamental groupoid and "\ominus" for reversing the orientation of a path.

Let D = [0, 1] \times [0, 1], and split its boundary \partial D into four line segments \gamma_1, \gamma_2, \gamma_3, \gamma_4: \begin{align} \gamma_1:[0,1] \to D;\quad&\gamma_1(t) = (t, 0) \\ \gamma_2:[0,1] \to D;\quad&\gamma_2(s) = (1, s) \\ \gamma_3:[0,1] \to D;\quad&\gamma_3(t) = (1-t, 1) \\ \gamma_4:[0,1] \to D;\quad&\gamma_4(s) = (0, 1-s) \end{align} so that \partial D = \gamma_1 \oplus \gamma_2 \oplus \gamma_3 \oplus \gamma_4

By our assumption that c_0 and c_1 are piecewise smooth tubular-homotopic, there is a piecewise smooth tubular homotopy H: D \to U. Set \begin{align} \Gamma_i(t) &= H(\gamma_{i}(t)) && i=1, 2, 3, 4 \\ \Gamma(t) &= H(\gamma(t)) =(\Gamma_1 \oplus \Gamma_2 \oplus \Gamma_3 \oplus \Gamma_4)(t) \end{align}

Let S be the image of D under H. That \iint_S \nabla\times\mathbf{F}\, \mathrm{d}S = \oint_\Gamma \mathbf{F}\, \mathrm{d}\Gamma follows immediately from Stokes' theorem. \mathbf{F} is lamellar, so the left side vanishes, i.e. 0=\oint_\Gamma \mathbf{F}\, \mathrm{d}\Gamma = \sum_{i=1}^4 \oint_{\Gamma_i} \mathbf{F} \, \mathrm{d}\Gamma

As H is tubular (satisfying TLH3), \Gamma_2 = \ominus \Gamma_4. Thus the line integrals along \Gamma_2 and \Gamma_4 cancel, leaving 0=\oint_{\Gamma_1} \mathbf{F} \, \mathrm{d}\Gamma +\oint_{\Gamma_3} \mathbf{F} \, \mathrm{d}\Gamma

On the other hand, c_0 = \Gamma_1 and c_1 = \ominus \Gamma_3, so that the desired equality follows almost immediately.


Conservative forces
The above Helmholtz's theorem gives an explanation as to why the work done by a conservative force in changing an object's position is path independent. First, we introduce Lemma 2-2, which is both a corollary and a special case of Helmholtz's theorem.

Lemma 2-2. Let U\subseteq\R^3 be an open set, with a lamellar vector field \mathbf{F} and a piecewise smooth loop c_0: [0, 1] \to U. Fix a point \mathbf{p} \in U; if there is a homotopy H: [0, 1] \times [0, 1] \to U such that

  • SC0 H is piecewise smooth,
  • SC1 H(t, 0) = c_0(t) for all t \in [0, 1],
  • SC2 H(t, 1) = \mathbf{p} for all t \in [0, 1],
  • SC3 H(0, s) = H(1, s) for all s \in [0, 1].
Then, \int_{c_0} \mathbf{F} \, \mathrm{d}c_0=0

The above Lemma 2-2 follows from theorem 2-1. In Lemma 2-2, the existence of H satisfying SC0 to SC3 is crucial; the question is whether such a homotopy can be taken for arbitrary loops. If U is simply connected, such an H exists. The definition of a simply connected space follows:

Definition 2-2 (simply connected space). Let M\subseteq\R^n be non-empty and path-connected. M is called simply connected if and only if for any continuous loop c: [0, 1] \to M there exists a continuous tubular homotopy H: [0, 1] \times [0, 1] \to M from c to a fixed point \mathbf{p} \in M; that is,

  • SC0' H is continuous,
  • SC1 H(t, 0) = c(t) for all t \in [0, 1],
  • SC2 H(t, 1) = \mathbf{p} for all t \in [0, 1],
  • SC3 H(0, s) = H(1, s) for all s \in [0, 1].

The claim that "for a conservative force, the work done in changing an object's position is path independent" might seem to follow immediately if M is simply connected. However, recall that simple connectedness only guarantees the existence of a continuous homotopy satisfying SC1-3; we seek a piecewise smooth homotopy satisfying those conditions instead.

Fortunately, the gap in regularity is resolved by Whitney's approximation theorem.

In other words, the possibility of finding a continuous homotopy but not being able to integrate over it is eliminated with the benefit of higher mathematics. We thus obtain the following theorem.

Theorem 2-2. Let U\subseteq\R^3 be open and simply connected with an irrotational vector field \mathbf{F}. For all piecewise smooth loops c_0: [0, 1] \to U, \int_{c_0} \mathbf{F} \, \mathrm{d}c_0 = 0
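A numerical sketch of the path independence that theorem 2-2 implies, for an assumed conservative field F = grad(phi) with phi = x^2 + y*z (an arbitrary choice): the work along two different paths from (0, 0, 0) to (1, 2, 3) agrees, both equal to phi(1, 2, 3) - phi(0, 0, 0) = 7.

```python
import math

# Path-independence check for the conservative field F = grad(phi),
# phi = x^2 + y*z. Two different paths share endpoints (0,0,0) and (1,2,3),
# so the work integrals should agree (and equal phi(1,2,3) - phi(0,0,0) = 7).

def F(x, y, z):
    return (2.0 * x, z, y)     # gradient of phi = x^2 + y*z, by hand

def work(path, N=20000):
    # path(t) for t in [0, 1]; velocity by central differences
    h = 1e-6
    total = 0.0
    for j in range(N):
        t = (j + 0.5) / N
        p = path(t)
        pp, pm = path(t + h), path(t - h)
        vel = tuple((a - b) / (2 * h) for a, b in zip(pp, pm))
        total += sum(f * v for f, v in zip(F(*p), vel)) / N
    return total

straight = lambda t: (t, 2 * t, 3 * t)                          # line segment
curved = lambda t: (t ** 2, 2 * t ** 3, 3 * math.sin(math.pi * t / 2))
ws, wc = work(straight), work(curved)
print(ws, wc)   # both approximate 7
```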


Maxwell's equations
In the physics of electromagnetism, Stokes' theorem provides the justification for the equivalence of the differential and integral forms of the Maxwell–Faraday equation and the Maxwell–Ampère equation. For Faraday's law, Stokes' theorem is applied to the electric field, \mathbf{E}: \oint_{\partial\Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{l}= \iint_\Sigma \mathbf{\nabla}\times \mathbf{E} \cdot \mathrm{d} \mathbf{S} .

For Ampère's law, Stokes' theorem is applied to the magnetic field, \mathbf{B}: \oint_{\partial\Sigma} \mathbf{B} \cdot \mathrm{d}\boldsymbol{l}= \iint_\Sigma \mathbf{\nabla}\times \mathbf{B} \cdot \mathrm{d} \mathbf{S} .

